
59b1deff341edb0b76ace57820cef237-AuthorFeedback.pdf

Neural Information Processing Systems

Indeed, the results in Table 1, which shows the mean absolute percentage errors (MAPE), demonstrate this. The accuracy of the neural ODE for the Poisson process is on par with our neural JSDE. However, for the Hawkes process (Exponential), Hawkes process (Power-Law), and self-correcting process, the neural ODE gives much larger prediction errors. For the social/medical datasets, we used a 20/64-dimensional latent state and parameterized the functions with two-hidden-layer MLPs with 32/64 hidden units. The time series modeling software that we used is designed for long event sequences and ignores the idle time after the last event.
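The MAPE metric quoted above is a one-line formula; a minimal sketch (the intensity values here are illustrative, not the paper's data):

```python
def mape(predicted, actual):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs(p - a) / abs(a) for p, a in zip(predicted, actual)) / len(actual)

# Illustrative predicted vs. true conditional intensities (invented numbers):
true_intensity = [1.0, 2.0, 4.0]
pred_intensity = [1.1, 1.8, 4.4]
print(round(mape(pred_intensity, true_intensity), 2))  # → 10.0
```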


How to Set Up a Google Home Security System: Best Cameras, Doorbells, and Other Devices

WIRED

If you want to secure your home, Google's Nest range is one of the smartest and easiest ways to do it. There's no need for an expensive, professionally installed home security system for a little peace of mind. You can keep tabs on your home when you're away, check in on your kids or pets, and discourage intruders with a few well-placed security cameras and connected devices.


MEMTRACK: Evaluating Long-Term Memory and State Tracking in Multi-Platform Dynamic Agent Environments

Deshpande, Darshan, Gangal, Varun, Mehta, Hersh, Kannappan, Anand, Qian, Rebecca, Wang, Peng

arXiv.org Artificial Intelligence

Recent work on context and memory benchmarking has primarily focused on conversational settings, but evaluating memory in dynamic enterprise environments is crucial for its effective application. We introduce MEMTRACK, a benchmark designed to evaluate long-term memory and state tracking in multi-platform agent environments. MEMTRACK models realistic organizational workflows by integrating asynchronous events across multiple communication and productivity platforms such as Slack, Linear, and Git. Each benchmark instance provides a chronologically platform-interleaved timeline with noisy, conflicting, cross-referring information, as well as potential codebase/file-system comprehension and exploration. Consequently, our benchmark tests memory capabilities such as acquisition, selection, and conflict resolution. We curate the MEMTRACK dataset through both manual expert-driven design and scalable agent-based synthesis, generating ecologically valid scenarios grounded in real-world software development processes. We introduce pertinent metrics for Correctness, Efficiency, and Redundancy that capture the effectiveness of memory mechanisms beyond simple QA performance. Experiments across SoTA LLMs and memory backends reveal challenges in utilizing memory across long horizons, handling cross-platform dependencies, and resolving contradictions. Notably, the best-performing GPT-5 model achieves only a 60% Correctness score on MEMTRACK. This work provides an extensible framework for advancing evaluation research for memory-augmented agents beyond the existing focus on conversational setups, and sets the stage for multi-agent, multi-platform memory benchmarking in complex organizational settings.
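To make the Correctness metric concrete, here is a hypothetical scoring sketch; this is not MEMTRACK's actual evaluation code, and the example answers are invented:

```python
def correctness(predictions, gold):
    """Fraction of benchmark questions answered exactly as the gold label.
    Hypothetical sketch -- MEMTRACK's real metric may use softer matching."""
    assert len(predictions) == len(gold)
    return sum(p == g for p, g in zip(predictions, gold)) / len(gold)

# Invented agent answers vs. invented gold labels: two of three match.
preds = ["ship v2 on Friday", "alice", "deadline moved"]
labels = ["ship v2 on Friday", "bob", "deadline moved"]
print(round(correctness(preds, labels), 3))
```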


agree the neural JSDE is an interesting model that introduces discrete events into a continuous latent ODE framework

Neural Information Processing Systems

We thank the reviewers for their careful reading of the paper.

Table 1: Neural ODE / JSDE predicted conditional intensity error (MAPE).

Process          ODE    JSDE
Poisson          1.2    1.3
Hawkes (E)       172.0  5.9
Hawkes (PL)      91.4   17.1
Self-Correcting  27.2   9.3

We expect the neural ODE model to work well only for the Poisson process, which does not depend on the event history. We will add the following details to the paper. Reviewer 1 also noted that the Poisson dataset does not fit well to the Poisson process. We find that using longer Poisson sequences remedies this issue.
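The contrast in the table comes from history dependence: a Poisson intensity is constant, while a Hawkes intensity jumps at each event and decays afterward. A minimal sketch of the exponential-kernel Hawkes conditional intensity, lambda(t) = mu + sum over past events t_i < t of alpha * exp(-beta * (t - t_i)); the parameter values are illustrative, not those used in the paper:

```python
import math

def hawkes_intensity(t, history, mu=0.5, alpha=0.8, beta=1.0):
    """Conditional intensity of an exponential-kernel Hawkes process:
    lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i))."""
    return mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in history if ti < t)

events = [0.0, 0.5, 0.9]
print(hawkes_intensity(1.0, events))  # well above the baseline mu right after a burst
```

With no history the intensity reduces to the baseline mu, which is exactly the Poisson case the neural ODE handles well.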


Latent Logic Tree Extraction for Event Sequence Explanation from LLMs

Song, Zitao, Yang, Chao, Wang, Chaojie, An, Bo, Li, Shuang

arXiv.org Artificial Intelligence

Modern high-stakes systems, such as healthcare or robotics, often generate vast streaming event sequences. Our goal is to design an efficient, plug-and-play tool to elicit logic tree-based explanations from Large Language Models (LLMs) to provide customized insights into each observed event sequence. Built on the temporal point process model for events, our method employs the likelihood function as a score to evaluate generated logic trees. We propose an amortized Expectation-Maximization (EM) learning framework and treat the logic trees as latent variables. In the E-step, we evaluate the posterior distribution over the latent logic trees using an LLM prior and the likelihood of the observed event sequences. The LLM provides a high-quality prior for the latent logic trees; however, since the posterior is defined over a discrete combinatorial space, we cannot obtain a closed-form solution. We propose to generate logic tree samples from the posterior using a learnable GFlowNet, which is a diversity-seeking generator for structured discrete variables. The M-step employs the generated logic rules to approximate marginalization over the posterior, facilitating the learning of model parameters and refining the tunable LLM prior parameters. In the online setting, our locally built, lightweight model iteratively extracts the most relevant rules from LLMs for each sequence using only a few iterations. Empirical demonstrations showcase the promising performance and adaptability of our framework.


Tailor: Size Recommendations for High-End Fashion Marketplaces

Candeias, Alexandre, Silva, Ivo, Sousa, Vitor, Marcelino, José

arXiv.org Artificial Intelligence

In the ever-changing and dynamic realm of high-end fashion marketplaces, providing accurate and personalized size recommendations has become a critical aspect. Meeting customer expectations in this regard is not only crucial for ensuring their satisfaction but also plays a pivotal role in driving customer retention, which is a key metric for the success of any fashion retailer. We propose a novel sequence classification approach to address this problem, integrating implicit (Add2Bag) and explicit (ReturnReason) user signals. Our approach comprises two distinct models: one employs LSTMs to encode the user signals, while the other leverages an Attention mechanism. Our best model outperforms SFNet, improving accuracy by 45.7%. By using Add2Bag interactions we increase the user coverage by 24.5% when compared with only using Orders. Moreover, we evaluate the models' usability in real-time recommendation scenarios by conducting experiments to measure their latency performance.
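The reported 24.5% coverage gain from Add2Bag is a set computation: the fraction of users for whom at least one usable signal exists. A hedged sketch with invented counts (not Tailor's data):

```python
def user_coverage(users_with_signal, all_users):
    """Fraction of users for whom at least one signal exists."""
    return len(users_with_signal & all_users) / len(all_users)

all_users = set(range(1000))
users_with_orders = set(range(400))        # invented: 40% have order history
users_with_add2bag = set(range(300, 800))  # invented: Add2Bag reaches additional users

orders_only = user_coverage(users_with_orders, all_users)
combined = user_coverage(users_with_orders | users_with_add2bag, all_users)
print(orders_only, combined)  # coverage rises once Add2Bag signals are included
```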


Modeling Inter-Dependence Between Time and Mark in Multivariate Temporal Point Processes

Waghmare, Govind, Debnath, Ankur, Asthana, Siddhartha, Malhotra, Aakarsh

arXiv.org Artificial Intelligence

Temporal Point Processes (TPP) are probabilistic generative frameworks. They model discrete event sequences localized in continuous time. Generally, real-life events reveal descriptive information, known as marks. Marked TPPs model time and marks of the event together for practical relevance. Conditioned on past events, marked TPPs aim to learn the joint distribution of the time and the mark of the next event. For simplicity, conditionally independent TPP models assume time and marks are independent given event history. They factorize the conditional joint distribution of time and mark into the product of individual conditional distributions. This structural limitation in the design of TPP models hurts the predictive performance on entangled time and mark interactions. In this work, we model the conditional inter-dependence of time and mark to overcome the limitations of conditionally independent models. We construct a multivariate TPP conditioning the time distribution on the current event mark in addition to past events. Besides the conventional intensity-based models for conditional joint distribution, we also draw on flexible intensity-free TPP models from the literature. The proposed TPP models outperform conditionally independent and dependent models in standard prediction tasks. Our experimentation on various datasets with multiple evaluation metrics highlights the merit of the proposed approach.
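The gap between the dependent joint p(t, m | history) and the conditionally independent factorization p(t | history) * p(m | history) shows up even in a toy discrete example (all probabilities below are invented for illustration):

```python
# Toy joint over (time bin, mark), where time depends on the mark (invented numbers).
joint = {
    ("early", "A"): 0.40, ("late", "A"): 0.10,
    ("early", "B"): 0.10, ("late", "B"): 0.40,
}

# Marginals, as a conditionally independent model would use them.
p_time = {t: sum(v for (ti, _), v in joint.items() if ti == t) for t in ("early", "late")}
p_mark = {m: sum(v for (_, mi), v in joint.items() if mi == m) for m in ("A", "B")}

# The product of marginals assigns 0.25 to every cell and misses the coupling,
# while the true joint puts 0.40 on ("early", "A").
factorized = {(t, m): p_time[t] * p_mark[m] for t in p_time for m in p_mark}
print(joint[("early", "A")], factorized[("early", "A")])  # 0.4 vs 0.25
```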


Nest Doorbell review: Google's porch sentinel shines

PCWorld

It's been three years since Google launched the Nest Hello, a wired video doorbell with facial recognition that helped set the standard for smart doorbells. In that time, many competitors have appeared, but few have come close to the quality and reliability of that device. The Nest Doorbell (battery) is a Google-made video doorbell that can run on battery power (it can also operate on wired power, if you have that infrastructure and wish to connect it to your existing doorbell chime). In our tests the device performed excellently, gave us no problems, and proved itself to be a worthy sister device to the original Nest Hello. A large black circle with a camera at its center sits at the top of the new Nest Doorbell. A small LED below that indicates when the camera is recording or processing video.


Nooie's new smart cam offers 360 degrees of security for a steal

USATODAY - Tech Top Stories

The Nooie Cam 360 has a rotating, high-def camera that automatically tracks you as you move about the room. Here are the Nooie Cam 360's specs: The Nooie Cam 360 is a budget-friendly indoor home security camera that features motion tracking and, as the name implies, 360-degree rotation. The camera is equipped with a 1080p high-def lens and two 940nm infrared LEDs. It has other smart camera features like two-way audio functionality, night vision, and a status light indicator that can be toggled on or off. Nooie smart alerts are sent when the camera detects sound or motion.


Customer Churn Prediction Using Machine Learning: Main Approaches and Models

#artificialintelligence

Customer retention is one of the primary growth pillars for products with a subscription-based business model. Competition is tough in the SaaS market, where customers are free to choose from plenty of providers even within one product category. Several bad experiences – or even one – and a customer may quit. And if droves of unsatisfied customers churn at a clip, both material losses and damage to reputation would be enormous. For this article, we reached out to experts from HubSpot and ScienceSoft to discuss how SaaS companies handle the problem of customer churn with predictive modeling. You will discover approaches and best practices for solving this problem. We'll discuss collecting data about clients' relationships with a brand, identify the characteristics of customer behavior that correlate the most with churn, and explore the logic behind selecting the best-performing machine learning models. Customer churn (or customer attrition) is a tendency of customers to abandon a brand and stop being a paying client of a particular business. The percentage of customers that discontinue using a company's products or services during a particular time period is called a customer churn (attrition) rate. One of the ways to calculate a churn rate is to divide the number of customers lost during a given time interval by the number of acquired customers, and then multiply that number by 100 percent.
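The churn-rate calculation described above, as a minimal sketch (the customer counts are invented):

```python
def churn_rate(customers_lost, customers_acquired):
    """Churn rate as described above: lost / acquired * 100 (percent)."""
    return customers_lost / customers_acquired * 100

# Example: 30 customers lost against 150 acquired over the same quarter.
print(churn_rate(30, 150))  # → 20.0
```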